Tutorial | Simple Instructions for AI Drawing Software Draw Things

(Due to DT’s interface update, the screenshots I had prepared earlier could no longer be used, which caused some delay. Thanks for your understanding.)

Since I use an iPad, all the screenshots below are from the DT interface on the iPad. If you use a MacBook, the interface should be similar. DT’s interface on the iPhone is different; you can explore that yourself, as I will only cover using DT on the iPad here.

After downloading and installing DT, the first time you open the app it should look like this.

[Screenshot: DT interface on first launch]

Pay attention to the menu on the left side.

[Screenshot: left-side menu]

I won’t explain every menu option on the left here; I will cover them as they come up. The “Basic” section contains the basic parameters used for AI drawing, and the “Advanced” section contains the advanced ones (older versions had them together). “All” contains both the basic and the advanced parameters.

In addition, DT automatically sets its display language based on the system language. If you want to switch languages, tap the icon shown in the figure below.

[Screenshot: icon for opening Machine Settings]

Tapping the icon circled in red in the lower-left corner opens a “Machine Settings” window. Scroll down to “Language Selection” and pick the language you want. A restart is required for the change to take effect, meaning you need to fully quit the app and reopen it.

In fact, the first thing to do when opening DT for the first time is to download an AI drawing model. DT has built-in download links for a large number of models, which can be downloaded directly without a VPN. With so many options, which one should you download? My suggestion is to start with the one below; I will use it to introduce DT’s basic usage, and later I will cover the characteristics of the different models I have used in more detail.

Download the model called Realistic Vision v3.0 (8-bit), which is a well-known model. The other models in the list are ones I downloaded and imported myself, so you can ignore them for now. The green box in the picture toggles between the list view and large icons.

Speaking of models, let me share what I know about them. For Stable Diffusion, the mainstream base model used to be SD 1.5; now it is SDXL 1.0, and many models have been trained on each of these two bases. Compared with SD 1.5, SDXL 1.0 understands prompts better and produces better image quality, without needing lots of quality-boosting tags in the prompt. However, SDXL demands more from the hardware and takes longer to generate images. The model I recommend here is trained on SD 1.5, which is completely sufficient for explaining how to use DT. In addition, this model is 8-bit. I don’t understand the technical details, but the 16-bit version naturally has slightly better image quality; it is also larger and slower to generate with, and the quality improvement is almost imperceptible to the naked eye (the author of DT has published comparisons), so 8-bit is sufficient.
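To get a rough sense of why the 8-bit variant is smaller, consider the parameter count. This is only a back-of-the-envelope sketch: the ~860 million figure is the approximate size of the SD 1.5 UNet alone, and real checkpoint files also bundle the text encoder and VAE, so actual file sizes differ.

```python
# Back-of-the-envelope model-size comparison for 16-bit vs 8-bit weights.
# ~860M is the approximate parameter count of the SD 1.5 UNet alone;
# real checkpoints also include the text encoder and VAE.
PARAMS = 860_000_000

size_16bit_gb = PARAMS * 2 / 1e9  # 2 bytes per weight
size_8bit_gb = PARAMS * 1 / 1e9   # 1 byte per weight

print(f"16-bit: ~{size_16bit_gb:.2f} GB, 8-bit: ~{size_8bit_gb:.2f} GB")
```

Halving the bytes per weight roughly halves the download size and the memory the model occupies while generating, which is what makes the 8-bit version attractive on an iPad.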

After downloading the model, you can start drawing.

[Screenshot: prompt input boxes]

Of these two boxes, the left one is the positive prompt, where you describe what you want the AI to generate; the right one is the negative prompt, where you describe what you do not want it to generate.

The positive prompt I wrote means: a woman, sports top, yoga pants, realistic photo, high definition, 8K, high resolution, detailed face, full body shot, looking at the camera. The negative prompt tells the AI not to generate low-quality images, bad hands or fingers, or deformities (although even with this description, the chance of deformed hands is still high).
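In practice, prompts like these are just comma-separated lists of tags. A minimal sketch of the prompt pair described above; the tag wording is illustrative, not DT-specific syntax:

```python
# Illustrative positive/negative prompt pair, built as comma-separated tags
# (the usual convention for Stable Diffusion front ends).
positive_prompt = ", ".join([
    "a woman", "sports top", "yoga pants",
    "realistic photo", "high definition", "8K", "high resolution",
    "detailed face", "full body shot", "looking at the camera",
])
negative_prompt = ", ".join([
    "low quality", "worst quality",
    "bad hands", "bad fingers", "deformed",
])

print(positive_prompt)
print(negative_prompt)
```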

For convenience, click “All” in the left-hand menu so that all parameters appear in the list on the right. Pay attention to the following settings: the model is the one we just downloaded, “LoRA” is disabled, and “Control” is disabled (these two features will be explained later). “Strength” is a key parameter; for generating a new image, set “Strength” to 100%.

[Screenshot: parameter list with model, LoRA, Control, and Strength]

Scroll down the parameter list and set “Image Size” to 512x768. If the image size is too small, the quality of the generated image will suffer. Speaking of image size, let me say a few more words. SD 1.5 models natively support 512x512, and SDXL models natively support 1024x1024. In my experience, setting an overly large size does not necessarily improve quality and can even produce more deformities in the generated image, which is why each model’s trainer gives a recommended size. We are using an SD 1.5 model and want a full-body shot, so a portrait size of 512x768 is a good fit. If you really need a large image, use the “Amplifier” function, which will be explained later.
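One reason the native sizes matter: Stable Diffusion denoises in a latent space that is 8x smaller in each dimension than the output image, so an SD 1.5 model was trained on latents around 64x64 and an SDXL model around 128x128. A small sketch of that relationship (the factor of 8 is the standard downsampling of SD’s VAE):

```python
# Stable Diffusion's VAE downsamples the image by a factor of 8 in each
# dimension; the denoising happens on a latent of this reduced size.
def latent_size(width: int, height: int, factor: int = 8):
    if width % factor or height % factor:
        raise ValueError("width and height must be multiples of 8")
    return width // factor, height // factor

print(latent_size(512, 768))    # portrait size used in this tutorial
print(latent_size(1024, 1024))  # SDXL's native square size
```

Sizes far from what the model was trained on give the network unfamiliar latent shapes, which is one common explanation for the extra limbs and deformities you see at oversized resolutions.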

Next, another important parameter is “Steps”. Because I am using an iPad Pro, which cannot compete with a computer equipped with an Nvidia 4090, I usually set “Steps” to 20, which I find sufficient. More steps generally give a more refined image with more detail, but the generation time also grows roughly in proportion to the step count.
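The reason steps cost time is simple: the sampler calls the denoising network once per step, so 40 steps take roughly twice as long as 20. A toy cost model (the seconds-per-step constant is purely illustrative, not a DT benchmark):

```python
# Toy cost model: one denoising-network call per sampling step, so total
# time scales roughly linearly with the step count. The per-step cost
# here is an illustrative stand-in, not a measured DT number.
def estimated_time(steps: int, seconds_per_step: float = 1.5) -> float:
    return steps * seconds_per_step

assert estimated_time(40) == 2 * estimated_time(20)
```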

[Screenshot: parameter list, continued]

Continue scrolling down the parameter list and set “Text Guidance” to 7. For the models I use most often, the recommended “Text Guidance” value is 10, but 7 is fine here.
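Under the hood, “Text Guidance” is usually the classifier-free guidance scale: each step the model predicts noise both with and without the prompt, and the scale amplifies the difference, pulling the image toward the prompt. A minimal sketch with scalar stand-ins for the model’s noise predictions (DT’s exact internals are not documented here, so treat this as the common formulation):

```python
# Classifier-free guidance: blend the unconditional and prompt-conditioned
# predictions; the guidance scale amplifies the prompt's pull.
# Scalars stand in for the model's noise-prediction tensors.
def apply_guidance(uncond: float, cond: float, scale: float) -> float:
    return uncond + scale * (cond - uncond)

# scale 1.0 just follows the conditioned prediction;
# scale 7.0 pushes well past it in the prompt's direction.
assert apply_guidance(0.2, 0.5, 1.0) == 0.5
print(apply_guidance(0.2, 0.5, 7.0))
```

This is also why very high guidance values tend to over-saturate and distort images: the prediction gets pushed far outside what the model actually produced.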

(Note: I will mention things as they come to mind. The DT interface includes text explanations for these parameters: the gray text below each parameter describes what it does. Do read them carefully.)

Set the “Sampler” as shown in the figure. There are very professional articles about samplers on Zhihu that you can search for and read. Honestly, I read them and did not understand much (^_^;)

Set “Clip Skip” to 2. See the gray explanation below the parameter for its exact meaning. In my experience it is set to 2 in most cases, and to 1 in a few.
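For reference, “Clip Skip” is usually interpreted as taking the text embedding from an earlier layer of the CLIP text encoder: 1 means the final layer, 2 the second-to-last, and many SD 1.5 community models were trained with 2. A minimal sketch of that counting convention (the layer list is a stand-in for the encoder’s hidden states):

```python
# "Clip Skip" convention: count layers back from the end of the CLIP text
# encoder. The list below stands in for the encoder's hidden states.
def pick_text_embedding(hidden_states: list, clip_skip: int):
    if clip_skip < 1:
        raise ValueError("clip_skip counts from 1 (the final layer)")
    return hidden_states[-clip_skip]

layers = [f"layer{i}" for i in range(1, 13)]  # CLIP-L has 12 layers
assert pick_text_embedding(layers, 1) == "layer12"
assert pick_text_embedding(layers, 2) == "layer11"
```

Matching the value the model was trained with is the point: feeding it embeddings from a different layer than it expects degrades the output.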

[Screenshot: parameter list, continued]

Next is “High-Resolution Repair”, which can be enabled. Above it is a “Face Repair” option; leave it disabled for now. I have not used that feature, and many model trainers strongly recommend disabling it. I will cover better face-repair methods later.

The other parameters that haven’t been mentioned can be left at their default values.

Then you can click the “Generate” button to generate the image.

[Screenshot: generation in progress]

As shown in the figure above, the main window shows a blue progress bar while generating, and a small preview image appears in the lower-left corner of the window.

[Screenshot: generated image]

Once generation is complete, the image is displayed in the main window. Take a close look at the face: it is not very clear. When you generate a portrait in which the face occupies a large share of the image, the facial details are usually rich, with no blur or deformation, so face repair is rarely needed. But here we generated a full-body shot, where the face is only a small part of the image; in this situation the face may be blurry or even have deformed features, so face repair is needed.

This is a good opportunity to explain how to perform face repair.

[Screenshot: zoomed-in view of the face]

Pinch to zoom in on the generated image, and you can see that the face is indeed blurry, the details are poor, and the nose and mouth look slightly deformed.

[Screenshot: eraser tool]

Tap the eraser icon at the bottom left of the image window and, using the erasing tool in freehand mode, erase the face in the image. I erased the ears as well.

[Screenshot: Strength set to 40%, with the erased area semi-transparent]

Next is a crucial step: set “Strength” to 40%, and the erased part of the face on the right becomes semi-transparent. In practice this value can range from 10% to 80%; the smaller it is, the closer the regenerated face stays to the original. When repairing a face, try different values to find the most satisfying result.
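Why does a lower “Strength” keep the repaired face closer to the original? In the common img2img/inpainting formulation, the source image is noised only part-way along the schedule and only the last strength-fraction of the denoising steps is actually run, so less of the original gets overwritten. A sketch of that arithmetic (DT’s exact internals may differ):

```python
# Common img2img/inpainting formulation: "strength" decides how far into
# the noise schedule the source image is pushed, and therefore how many
# of the denoising steps are actually run. DT's internals may differ.
def inpaint_schedule(total_steps: int, strength: float):
    if not 0.0 < strength <= 1.0:
        raise ValueError("strength must be in (0, 1]")
    steps_run = max(1, round(total_steps * strength))
    start_step = total_steps - steps_run  # noisy-end steps skipped
    return start_step, steps_run

assert inpaint_schedule(20, 1.0) == (0, 20)  # 100%: full generation
assert inpaint_schedule(20, 0.4) == (12, 8)  # 40%: gentle re-drawing
```

This is also why 100% strength on a fresh canvas is plain text-to-image generation: nothing of the source survives the full noising.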

Then click the generate button.

[Screenshot: repair in progress]

The AI then starts generating. Note the text circled in red: it says “Image Synthesis + Repair”.

[Screenshot: repaired face]

It’s done. Take a look at the face: the detail is richer, with no blur and no deformation.

Tap the eraser icon again to exit erasing mode so you can pinch-zoom the image again. The icon circled in red at the top of the interface saves the generated image to the device’s photo album. The red box on the right keeps every generated image and operation; tapping an entry jumps back to that step.

Finally, let’s take a look at the generated image.

[Screenshot: final generated image]

It may not be a great picture (^_^;). Personally, I think that is down to the model used and to the prompts being too simple: there is no description of the scene, lighting, or style, and I did not follow the prompt format recommended by the model’s trainer. In theory this model can generate far more exquisite images. Here I mainly wanted to demonstrate the basic drawing workflow in DT; how to generate more exquisite images will be covered later.

For comparison, I generated an image with my usual model (trained on SDXL) using the same prompts and no face repair, and the result is much better.

[Screenshot: image from the SDXL-based model]

Having written this far, I felt a bit unsatisfied: it is clearly a good model, even though it is based on SD 1.5.

So I slightly modified the prompts, adding “bright soft light, in gym”, and tried again. The generated image did improve a lot, and no face repair was needed.

[Screenshot: modified prompts]

The red line marks the added prompt. This shows that, besides the model, the prompts also have a significant impact on the quality of the final image. I generated a few new images and posted them below.

[Screenshots: eight more generated images]

That’s it for now. Next time I will talk about the basic operations of image-to-image generation and doodling. See you next time.

(˶‾᷄ ⁻̫ ‾᷅˵)